A Non-Symbolic Theory of Conscious Content: Imagery and Activity

Nigel J.T. Thomas


Source: http://www.imagery-imagination.com/nonsym.htm

I should probably say here that, given the available length for this presentation, the theory of imagery I advocate, what I call Perceptual Activity Theory, can only be sketched in the barest outline. A more detailed account can be found in my article in Cognitive Science (Thomas, 1999). There I also argue for the theory on rather different grounds, and give a fairly extensive empirical and conceptual critique of the better-known rival theories.

But any workable theory of mental imagery is going to be parasitic upon a theory of perception. Almost certainly, they share mechanisms to a considerable degree. Not only is there a good deal of empirical evidence to support this (Farah, 1988; Kosslyn, 1994)(4); a phenomenological resemblance to perceptual experience is, surely, criterial for imagery (Finke, 1989; Thomas, 1997). Since the time of Alhazen, visual perception has been seen as a matter of getting representations into the head (Lindberg, 1976). Originally, of course, these representations were optical images, and the problem was to understand how they got into the eye. However, once that process ceased to be mysterious, it became apparent that only the first step had been taken toward a full scientific understanding of vision. The almost universal response has been to try to understand how the meaningful content in the optical image gets itself further into the head, and is transformed into a more cognitively useful representational form. Thus we find the following given as a textbook definition of computer vision research:

Computer vision is the construction of explicit, meaningful descriptions of physical objects from images. (Ballard & Brown, 1982)(5)

From this point of view, percepts are inner representations, and we should thus expect mental images to be the same. The principal cognitive science debate about imagery, the so-called analog-propositional dispute, has been about the proper format for such representations. Pylyshyn (1978, 1981) argues that a discursive symbolic description is sufficient to account for the phenomenology and the experimental findings concerning imagery; Kosslyn (1980) argues that it is not sufficient, and that the symbolic description must be translated into a "quasi-pictorial" format when imagery is accessed.
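
To make the contrast concrete for readers unfamiliar with the dispute, the following toy sketch (my own illustration, in Python, not drawn from Pylyshyn's or Kosslyn's work) encodes one imagined scene in each candidate format: a discursive symbolic description on the one hand, and a quasi-pictorial array, in which spatial relations are carried implicitly by layout, on the other.

```python
# A toy illustration (not from Pylyshyn or Kosslyn) of the two candidate
# formats in the analog-propositional dispute, for one imagined scene:
# "a ball is on a box, and the box is to the left of a lamp".

# Propositional / descriptive format: a structured symbolic description.
propositional_image = [("ON", "ball", "box"), ("LEFT_OF", "box", "lamp")]

# Quasi-pictorial format: a 2-D array in which spatial relations are carried
# implicitly by the layout of filled cells (o = ball, B = box, L = lamp).
quasi_pictorial_image = [
    [" ", "o", " ", " "],
    [" ", "B", " ", "L"],
]

# In the descriptive format, "is the ball on the box?" is answered by matching
# a symbolic pattern; in the depictive format, by inspecting spatial layout.
print(("ON", "ball", "box") in propositional_image)   # True
print(quasi_pictorial_image[0][1] == "o" and
      quasi_pictorial_image[1][1] == "B")             # True
```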

Clearly, if we are looking for a non-representationalist account of the mind (as the first part of this paper suggests we should), we will also need a different view of perception. Fortunately, there is an alternative at hand, already closely allied with the non-representational robotics movement (Scassellati, 1998). The so-called Active Vision approach to machine perception (see, e.g., Bajcsy, 1988; Ballard, 1991; Aloimonos, 1992; Swain & Stricker, 1993; Landy, Maloney, & Pavel, 1996)(6) takes the primary task of perception to be to provide for the intelligent control of behavior in the environment, and it rejects the requirement of building inner descriptions in favor of a view wherein:

Visual sensory data is analyzed purposefully in order to answer specific queries posed by the [robot] observer. The observer constantly adjusts its vantage point in order to uncover the piece of information that is immediately most pressing. (Blake & Yuille, 1992).

In order to behave intelligently in its current situation a robot may need to determine certain specific, behaviorally relevant facts about its environment (Is there a clear path that way? How much further will I need to extend my arm in order to grasp this object? Etc.), and it is equipped with perceptual transducers in order to give it the capability of answering such questions. The transducers are actively deployed to find the answers. The machine does not first build up an inner representation of the environment and then query that; rather, it queries the environment itself when it needs specific information. The world is actively explored rather than passively registered. Furthermore, the answers obtained do not amount to representations in the traditional sense, but may rather be just a simple YES or NO ("There is/isn't a clear path") or perhaps some parametric value ("Extend the arm one third of its reach"): answers that are not meaningful in a context-free sense, but only in relation to the question that elicited them.
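
The following minimal sketch (again my own illustration, with invented names such as ActiveObserver and RangeSensor, not code from any of the Active Vision papers cited above) shows the shape of this query-driven style of perception: the observer deploys its transducer to answer a specific question, and what it gets back is a bare YES/NO or a single parameter, meaningful only relative to that question.

```python
# Hypothetical sketch of query-driven perception; all names are invented
# for illustration and do not come from the cited literature.

class RangeSensor:
    """Stands in for a physical transducer the observer can actively deploy."""

    def sweep(self, direction: float) -> float:
        # A real robot would orient the sensor toward `direction` and return a
        # distance reading; here we simply pretend the path is 2.5 m clear.
        return 2.5


class ActiveObserver:
    def __init__(self, sensor: RangeSensor):
        self.sensor = sensor

    def clear_path(self, direction: float, needed_clearance: float) -> bool:
        """Query the environment: 'Is there a clear path that way?'
        The answer is a bare YES/NO, meaningful only relative to the query."""
        return self.sensor.sweep(direction) >= needed_clearance

    def reach_fraction(self, object_distance: float, arm_length: float) -> float:
        """Query: 'How much further must I extend my arm to grasp this object?'
        The answer is a parametric value (a fraction of full reach), again
        meaningful only in the context of the question that elicited it."""
        return max(0.0, min(1.0, object_distance / arm_length))


robot = ActiveObserver(RangeSensor())
print(robot.clear_path(direction=0.0, needed_clearance=1.0))      # True
print(robot.reach_fraction(object_distance=0.3, arm_length=0.9))  # ~0.33
```

Note that nowhere does the observer assemble a general-purpose model of its surroundings; each method deploys the sensor only to settle the question at hand.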

It is useful to think of this querying or exploring of the environment as the performance of measurements or tests, and to think of the sensory organs, the transducers, as instruments used to perform them. Or rather, we should say, the transducers, the sense organs, comprise parts of instruments. What they actually test for in any given circumstance depends not only on what sort of energies they are capable of transducing, but also on how they are deployed: how they are oriented and moved relative to the environmental point of interest, how their sensitivity is dynamically modulated and calibrated, and how their output is analyzed. Thus an instrument comprises not just a sensor, but also the motor system (or musculature) that moves it, the outward signal paths (or efferent nerves) that control these systems, and the central algorithm that controls this deployment and analyzes the incoming signal. A single sensor, then, according to the algorithm that is currently in control of it, may subserve many different perceptual instruments directed at ascertaining quite different sorts of facts about the environment. (Ballard (1991) calls this "sensor fission.")
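
The composition of a perceptual instrument, and the possibility of "sensor fission", can likewise be sketched in code. In the hypothetical example below (the Camera and PerceptualInstrument classes are my own invention, not Ballard's), a single sensor, paired with different deployment parameters and different analysis routines, yields two distinct instruments that test for quite different facts about the environment.

```python
# Hypothetical illustration of "sensor fission": one physical sensor subserves
# several perceptual instruments, each defined by how the sensor is deployed
# and how its output is analyzed. Names are invented for this sketch.

from typing import Callable, List


class Camera:
    """The shared physical transducer."""

    def capture(self, pan: float, tilt: float, gain: float) -> List[float]:
        # Placeholder: a real camera would return pixel data for this pose/gain.
        return [0.2, 0.8, 0.5]


class PerceptualInstrument:
    """Sensor + deployment parameters + analysis routine = one instrument."""

    def __init__(self, sensor: Camera, pan: float, tilt: float, gain: float,
                 analyze: Callable[[List[float]], float]):
        self.sensor = sensor
        self.pan, self.tilt, self.gain = pan, tilt, gain
        self.analyze = analyze

    def measure(self) -> float:
        raw = self.sensor.capture(self.pan, self.tilt, self.gain)
        return self.analyze(raw)


camera = Camera()  # a single sensor...
# ...shared by two instruments that test for different environmental facts
edge_detector = PerceptualInstrument(camera, pan=0.0, tilt=-0.3, gain=1.0,
                                     analyze=lambda px: max(px) - min(px))
brightness_probe = PerceptualInstrument(camera, pan=0.5, tilt=0.1, gain=0.5,
                                        analyze=lambda px: sum(px) / len(px))

print(edge_detector.measure())     # a contrast-like measurement
print(brightness_probe.measure())  # a mean-brightness measurement
```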

Sometimes, as in haptic perception, a sensor may interact directly with the environmental object of interest. More often, perhaps, it will interact with some reliably correlated causal product of it: thus, visual instruments interact directly with structural features of the "optic array" of light (Gibson, 1979), but the usefulness of this is that such features are reliable indicators of the layout of the more tangible environment. In some cases the correlated causal product might be within the perceiver's body (heat in the flesh as correlated with a nearby fire, for example, or sound-induced vibrations in the cochlea) or even within its brain (the map of the retinal image in V1 may be an example), but it would be a serious mistake to think of them, merely in virtue of such location, as percepts, or meaningful, potentially conscious representations, or even as computationally manipulable symbols (Thomas, 1999).
